A Convex Exemplar-based Approach to MAD-Bayes Dirichlet Process Mixture Models

Abstract

Appendix A. Notation

N = {1, 2, ..., N} =: [N] is the whole set of data points; i, j ∈ [N] denote points. d_ij := D(x_i, x_j). D is the number of datasets. T_d ⊆ [N] denotes the set of points in the d-th dataset, so that ⋃_{d=1}^{D} T_d = [N]. N_d = |T_d| is the number of points in dataset d, and d(i) ∈ [D] denotes the dataset index of point i. M ⊆ [N] is the set of medoids; k, l ∈ M denote clusters, which are identified with their medoids. S_k is the set of points in cluster k, and N_k = |S_k| is the number of points in cluster k. M(i) ∈ M denotes the cluster/representative of point i. Let D_k ⊆ [D] denote the datasets contained, or partially contained, in cluster k. Denote S_{k,d} := S_k ∩ T_d for d ∈ D_k, so that ⋃_{d ∈ D_k} S_{k,d} = S_k, and denote N_{k,d} := |S_{k,d}| for d ∈ D_k.

Appendix B. Proof of Theorem 1

Theorem 1 is a direct corollary of Theorem 2, obtained by setting θ = 0.
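The medoid assignment implied by this notation can be sketched in a few lines. This is a minimal illustration, not the paper's method: the function name `assign_to_medoids` and the choice of squared Euclidean distance for D(x_i, x_j) are assumptions made for the example.

```python
import numpy as np

def assign_to_medoids(X, medoids):
    """Illustrative sketch of the notation above: given data X and a
    medoid set M (indices into X), compute M(i), the nearest medoid of
    each point i, and the clusters S_k of points each medoid represents.
    Squared Euclidean distance stands in for the generic d_ij."""
    # d2[i, k]: distance from point i to the k-th medoid in the list
    d2 = np.array([[np.sum((X[i] - X[m]) ** 2) for m in medoids]
                   for i in range(len(X))])
    # M(i): index (into X) of the medoid closest to point i
    M_of_i = np.array(medoids)[np.argmin(d2, axis=1)]
    # S_k: the set of points assigned to medoid k
    clusters = {m: np.flatnonzero(M_of_i == m) for m in medoids}
    return M_of_i, clusters
```

For example, with four 1-D points split around 0 and 10 and medoids at indices 0 and 2, the first two points are assigned to medoid 0 and the last two to medoid 2.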


Similar Articles

A Convex Exemplar-based Approach to MAD-Bayes Dirichlet Process Mixture Models

MAD-Bayes (MAP-based Asymptotic Derivations) has recently been proposed as a general technique to derive scalable algorithms for Bayesian nonparametric models. However, the combinatorial nature of objective functions derived from MAD-Bayes results in hard optimization problems, for which current practice employs heuristic algorithms analogous to k-means to find local minima. In this paper, we co...
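The "heuristic algorithms analogous to k-means" that the abstract refers to can be illustrated by a DP-means-style procedure, which opens a new cluster whenever a point is farther than a penalty λ from every existing centroid. This is a hedged sketch of that family of heuristics, not the convex exemplar-based method the paper proposes; the function name and iteration count are assumptions.

```python
import numpy as np

def dp_means(X, lam, n_iter=20):
    """DP-means-style heuristic for a MAD-Bayes DP mixture objective:
    alternate nearest-centroid assignment with centroid updates, opening
    a new cluster when the nearest squared distance exceeds lam."""
    centroids = [X.mean(axis=0)]          # start from one global centroid
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assignment step: nearest centroid, or a brand-new one beyond lam
        for i, x in enumerate(X):
            d2 = [np.sum((x - c) ** 2) for c in centroids]
            j = int(np.argmin(d2))
            if d2[j] > lam:
                centroids.append(x.copy())
                j = len(centroids) - 1
            labels[i] = j
        # update step: drop empty clusters, relabel, recompute centroids
        keep = [k for k in range(len(centroids)) if np.any(labels == k)]
        remap = {k: r for r, k in enumerate(keep)}
        labels = np.array([remap[k] for k in labels])
        centroids = [X[labels == r].mean(axis=0) for r in range(len(keep))]
    return labels, centroids
```

Like k-means, this procedure only finds a local minimum of the combinatorial objective, which is exactly the shortcoming the abstract's convex formulation is meant to address.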


Unobserved Heterogeneity in Longitudinal Data An Empirical Bayes Perspective

Abstract. Empirical Bayes methods for Gaussian and binomial compound decision problems involving longitudinal data are considered. A new convex optimization formulation of the nonparametric (Kiefer-Wolfowitz) maximum likelihood estimator for mixture models is used to construct nonparametric Bayes rules for compound decisions. The methods are illustrated with some simulation examples as well as ...


Introducing of Dirichlet process prior in the Nonparametric Bayesian models frame work

Statistical models are used to learn about the mechanism from which the data are generated. Often it is assumed that the random variables y_i, i = 1, …, n, are samples from a probability distribution F that belongs to a parametric class of distributions. However, in practice, a parametric model may be inappropriate to describe the data. In this setting, the parametric assumption could be r...


MAD-Bayes: MAP-based Asymptotic Derivations from Bayes

The classical mixture of Gaussians model is related to K-means via small-variance asymptotics: as the covariances of the Gaussians tend to zero, the negative log-likelihood of the mixture of Gaussians model approaches the K-means objective, and the EM algorithm approaches the K-means algorithm. Kulis & Jordan (2012) used this observation to obtain a novel K-means-like algorithm from a Gibbs sam...
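The small-variance limit described in this abstract can be written out in one line. The following is a standard sketch for isotropic Gaussians with a shared variance σ², stated here as an illustration rather than a reproduction of the paper's derivation:

```latex
% Isotropic mixture of Gaussians with shared variance \sigma^2:
%   p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x;\, \mu_k,\, \sigma^2 I).
% Scaling the negative log-likelihood by \sigma^2 and letting
% \sigma^2 \to 0, the soft-min over components collapses to a hard min:
\lim_{\sigma^2 \to 0} \; -\sigma^2 \log p(x)
  \;=\; \min_{1 \le k \le K} \tfrac{1}{2}\,\lVert x - \mu_k \rVert^2 .
% Summing over data points recovers the K-means objective; in the DP
% mixture case, letting the concentration parameter scale suitably with
% \sigma^2 adds a per-cluster penalty, yielding the DP-means objective
%   \sum_i \min_k \lVert x_i - \mu_k \rVert^2 + \lambda K.
```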


Spike-and-Slab Dirichlet Process Mixture Models

In this paper, Spike-and-Slab Dirichlet Process (SS-DP) priors are introduced and discussed for non-parametric Bayesian modeling and inference, especially in the mixture models context. Specifying a spike-and-slab base measure for DP priors combines the merits of Dirichlet process and spike-and-slab priors and serves as a flexible approach in Bayesian model selection and averaging. Computationa...



Journal: (not recorded)

Volume:   Issue:

Pages: -

Publication date: 2015